Performance Testing of a Parallel Multiblock CFD Solver
Internal identifier: 009316 (Main/Exploration); previous: 009315; next: 009317
Authors: David Kerlick [United States]; Eric Dillon [United States]; David Levine [United States]
Source:
- The International Journal of High Performance Computing Applications [1094-3420]; 2001-02.
English descriptors
- Teeft :
- Aiaa proceedings, Aspect ratio, Average number, Better load balancing, Boeing company, Cache, Coalescing, Coarse grain, Common divisor, Compaq, Computational, Computer science, Convergence, Cray, Cray cray cray, Cray vector systems, Design processes, Digital fortran version, Disk space, Fine grain, Fine grid, Flow field, Full speed, Grain time speedup, Grid, Grid aspect ratio, Grid number points, Grid points, Grid zone, Grid zones, Hatay, High performance, Hsct, Inner loops, Iteration, Jespersen, Kayak, Larger number, Larger numbers, Largest grid zone, Machine comparison number, Main memory, Medium grid, Memory bandwidth, Methods support coalescing, More processors, Multigrid, Multigrid method, Multigrid scheme, Multipartition, Multipartitioning, Multiple processors, Nasa, Nasa ames, Nasa ames research center, Node performance, Number points, Origin system, Origin systems, Other systems, Overflow, Overflow code, Overflow offer, Parallel directives, Parallel hardware, Parallel implementations, Partitioning, Partitioning scheme, Partitioning strategies, Performance degradation, Performance evaluation, Processor, Programming model, Pulliam, Risc systems, Sequential version, Serial version, Single grid, Single grid zone, Single processor, Small amount, Small grid zones, Smaller numbers, Solver, Speedup, Strategy sequential, Test case, Test cases, Test problems, Threshold number, Total grid, Total number, Turbulence model, Unipartitioning, Unusual representation, Vector architecture, Vector supercomputers, Virtual memory, Viscous compressible flow equations, Wingbody, Wingbody case, Wingbody problem, Wingbody test case, Wingbody timings, Zone.
Abstract
A distributed-memory version of the OVERFLOW computational fluid dynamics code was evaluated on several parallel systems and compared with other approaches using test cases provided by the NASA-Boeing High-Speed Civil Transport program. A principal goal was to develop partitioning and load-balancing strategies that led to a reduction in computation time. We found multipartitioning, in which the aspect ratio of the multipartition is close to the aspect ratio of the grid zone, offered the best performance. The (uniprocessor) performance of the CRAY vector systems was superior to all other systems tested. However, the distributed-memory version when run on an SGI Origin system offers a price performance advantage over the CRAY vector systems. Performance on personal computer systems is promising but faces several hurdles.
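The abstract's partitioning criterion can be illustrated with a minimal sketch. This is not the paper's actual algorithm, only an illustration of the stated idea: among the ways to split `nprocs` processors into a `p × q` decomposition of a grid zone, prefer the one whose aspect ratio `p/q` best matches the zone's aspect ratio `nx/ny`. The function name and the log-space mismatch metric are assumptions for this example.

```python
# Illustrative sketch only (not the OVERFLOW implementation): pick a p x q
# processor decomposition whose aspect ratio p/q is closest to the grid
# zone's aspect ratio nx/ny.
import math


def best_partition(nx, ny, nprocs):
    """Return (p, q) with p * q == nprocs minimizing aspect-ratio mismatch."""
    target = nx / ny
    best = None
    for p in range(1, nprocs + 1):
        if nprocs % p:
            continue  # only exact factorizations of nprocs are valid
        q = nprocs // p
        # Compare in log space so over- and under-shooting the target
        # ratio by the same factor are penalized equally.
        mismatch = abs(math.log(p / q) - math.log(target))
        if best is None or mismatch < best[0]:
            best = (mismatch, p, q)
    return best[1], best[2]


# For a 200 x 100 zone on 8 processors, a 4 x 2 decomposition matches the
# zone's 2:1 aspect ratio exactly.
print(best_partition(200, 100, 8))  # -> (4, 2)
```

Under this criterion, elongated zones get correspondingly elongated processor grids, keeping subdomains similarly shaped to the zone itself.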
Url:
DOI: 10.1177/109434200101500103
Affiliations:
Links to previous steps (curation, corpus...)
- to stream Istex, to step Corpus: 000384
- to stream Istex, to step Curation: 000382
- to stream Istex, to step Checkpoint: 001E58
- to stream Main, to step Merge: 009838
- to stream Main, to step Curation: 009316
The document in XML format
<record><TEI wicri:istexFullTextTei="biblStruct"><teiHeader><fileDesc><titleStmt><title xml:lang="en">Performance Testing of a Parallel Multiblock CFD Solver</title>
<author wicri:is="90%"><name sortKey="Kerlick, David" sort="Kerlick, David" uniqKey="Kerlick D" first="David" last="Kerlick">David Kerlick</name>
</author>
<author wicri:is="90%"><name sortKey="Dillon, Eric" sort="Dillon, Eric" uniqKey="Dillon E" first="Eric" last="Dillon">Eric Dillon</name>
</author>
<author wicri:is="90%"><name sortKey="Levine, David" sort="Levine, David" uniqKey="Levine D" first="David" last="Levine">David Levine</name>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:10F89E9E8C955141EF1B7E305BA97BB38B570610</idno>
<date when="2001" year="2001">2001</date>
<idno type="doi">10.1177/109434200101500103</idno>
<idno type="url">https://api.istex.fr/ark:/67375/M70-X79RBH0Z-6/fulltext.pdf</idno>
<idno type="wicri:Area/Istex/Corpus">000384</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Corpus" wicri:corpus="ISTEX">000384</idno>
<idno type="wicri:Area/Istex/Curation">000382</idno>
<idno type="wicri:Area/Istex/Checkpoint">001E58</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Checkpoint">001E58</idno>
<idno type="wicri:doubleKey">1094-3420:2001:Kerlick D:performance:testing:of</idno>
<idno type="wicri:Area/Main/Merge">009838</idno>
<idno type="wicri:Area/Main/Curation">009316</idno>
<idno type="wicri:Area/Main/Exploration">009316</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title level="a" type="main" xml:lang="en">Performance Testing of a Parallel Multiblock CFD Solver</title>
<author wicri:is="90%"><name sortKey="Kerlick, David" sort="Kerlick, David" uniqKey="Kerlick D" first="David" last="Kerlick">David Kerlick</name>
<affiliation wicri:level="2"><country xml:lang="fr">États-Unis</country>
<placeName><region type="state">Washington (État)</region>
</placeName>
<wicri:cityArea>Mathematics and Computing Technology, Boeing Company, Seattle</wicri:cityArea>
</affiliation>
</author>
<author wicri:is="90%"><name sortKey="Dillon, Eric" sort="Dillon, Eric" uniqKey="Dillon E" first="Eric" last="Dillon">Eric Dillon</name>
<affiliation wicri:level="2"><country xml:lang="fr">États-Unis</country>
<placeName><region type="state">Washington (État)</region>
</placeName>
<wicri:cityArea>Mathematics and Computing Technology, Boeing Company, Seattle</wicri:cityArea>
</affiliation>
</author>
<author wicri:is="90%"><name sortKey="Levine, David" sort="Levine, David" uniqKey="Levine D" first="David" last="Levine">David Levine</name>
<affiliation wicri:level="2"><country xml:lang="fr">États-Unis</country>
<placeName><region type="state">Washington (État)</region>
</placeName>
<wicri:cityArea>Rosetta Inpharmatics, Kirkland</wicri:cityArea>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series><title level="j">The International Journal of High Performance Computing Applications</title>
<idno type="ISSN">1094-3420</idno>
<idno type="eISSN">1741-2846</idno>
<imprint><publisher>Sage Publications</publisher>
<pubPlace>Sage CA: Thousand Oaks, CA</pubPlace>
<date type="published" when="2001-02">2001-02</date>
<biblScope unit="volume">15</biblScope>
<biblScope unit="issue">1</biblScope>
<biblScope unit="page" from="22">22</biblScope>
<biblScope unit="page" to="35">35</biblScope>
</imprint>
<idno type="ISSN">1094-3420</idno>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt><idno type="ISSN">1094-3420</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="Teeft" xml:lang="en"><term>Aiaa proceedings</term>
<term>Aspect ratio</term>
<term>Average number</term>
<term>Better load balancing</term>
<term>Boeing company</term>
<term>Cache</term>
<term>Coalescing</term>
<term>Coarse grain</term>
<term>Common divisor</term>
<term>Compaq</term>
<term>Computational</term>
<term>Computer science</term>
<term>Convergence</term>
<term>Cray</term>
<term>Cray cray cray</term>
<term>Cray vector systems</term>
<term>Design processes</term>
<term>Digital fortran version</term>
<term>Disk space</term>
<term>Fine grain</term>
<term>Fine grid</term>
<term>Flow field</term>
<term>Full speed</term>
<term>Grain time speedup</term>
<term>Grid</term>
<term>Grid aspect ratio</term>
<term>Grid number points</term>
<term>Grid points</term>
<term>Grid zone</term>
<term>Grid zones</term>
<term>Hatay</term>
<term>High performance</term>
<term>Hsct</term>
<term>Inner loops</term>
<term>Iteration</term>
<term>Jespersen</term>
<term>Kayak</term>
<term>Larger number</term>
<term>Larger numbers</term>
<term>Largest grid zone</term>
<term>Machine comparison number</term>
<term>Main memory</term>
<term>Medium grid</term>
<term>Memory bandwidth</term>
<term>Methods support coalescing</term>
<term>More processors</term>
<term>Multigrid</term>
<term>Multigrid method</term>
<term>Multigrid scheme</term>
<term>Multipartition</term>
<term>Multipartitioning</term>
<term>Multiple processors</term>
<term>Nasa</term>
<term>Nasa ames</term>
<term>Nasa ames research center</term>
<term>Node performance</term>
<term>Number points</term>
<term>Origin system</term>
<term>Origin systems</term>
<term>Other systems</term>
<term>Overflow</term>
<term>Overflow code</term>
<term>Overflow offer</term>
<term>Parallel directives</term>
<term>Parallel hardware</term>
<term>Parallel implementations</term>
<term>Partitioning</term>
<term>Partitioning scheme</term>
<term>Partitioning strategies</term>
<term>Performance degradation</term>
<term>Performance evaluation</term>
<term>Processor</term>
<term>Programming model</term>
<term>Pulliam</term>
<term>Risc systems</term>
<term>Sequential version</term>
<term>Serial version</term>
<term>Single grid</term>
<term>Single grid zone</term>
<term>Single processor</term>
<term>Small amount</term>
<term>Small grid zones</term>
<term>Smaller numbers</term>
<term>Solver</term>
<term>Speedup</term>
<term>Strategy sequential</term>
<term>Test case</term>
<term>Test cases</term>
<term>Test problems</term>
<term>Threshold number</term>
<term>Total grid</term>
<term>Total number</term>
<term>Turbulence model</term>
<term>Unipartitioning</term>
<term>Unusual representation</term>
<term>Vector architecture</term>
<term>Vector supercomputers</term>
<term>Virtual memory</term>
<term>Viscous compressible flow equations</term>
<term>Wingbody</term>
<term>Wingbody case</term>
<term>Wingbody problem</term>
<term>Wingbody test case</term>
<term>Wingbody timings</term>
<term>Zone</term>
</keywords>
</textClass>
<langUsage><language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">A distributed-memory version of the OVERFLOW computational fluid dynamics code was evaluated on several parallel systems and compared with other approaches using test cases provided by the NASA-Boeing High-Speed Civil Transport program. A principal goal was to develop partitioning and load-balancing strategies that led to a reduction in computation time. We found multipartitioning, in which the aspect ratio of the multipartition is close to the aspect ratio of the grid zone, offered the best performance. The (uniprocessor) performance of the CRAY vector systems was superior to all other systems tested. However, the distributed-memory version when run on an SGI Origin system offers a price performance advantage over the CRAY vector systems. Performance on personal computer systems is promising but faces several hurdles.</div>
</front>
</TEI>
<affiliations><list><country><li>États-Unis</li>
</country>
<region><li>Washington (État)</li>
</region>
</list>
<tree><country name="États-Unis"><region name="Washington (État)"><name sortKey="Kerlick, David" sort="Kerlick, David" uniqKey="Kerlick D" first="David" last="Kerlick">David Kerlick</name>
</region>
<name sortKey="Dillon, Eric" sort="Dillon, Eric" uniqKey="Dillon E" first="Eric" last="Dillon">Eric Dillon</name>
<name sortKey="Levine, David" sort="Levine, David" uniqKey="Levine D" first="David" last="Levine">David Levine</name>
</country>
</tree>
</affiliations>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Wicri/Lorraine/explor/InforLorV4/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 009316 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 009316 | SxmlIndent | more
To link to this page in the Wicri network
{{Explor lien |wiki= Wicri/Lorraine |area= InforLorV4 |flux= Main |étape= Exploration |type= RBID |clé= ISTEX:10F89E9E8C955141EF1B7E305BA97BB38B570610 |texte= Performance Testing of a Parallel Multiblock CFD Solver }}
This area was generated with Dilib version V0.6.33.